

differential equation

Table of Contents

1. Introduction

A differential equation is an equation whose solutions are functions and which incorporates derivatives of the function being solved for. A differential equation often has an infinite family of solutions: its general solution encompasses many particular solutions. A particular solution to a differential equation is a specific function, corresponding to a single choice of initial value problem. Therefore, general solutions tell you how to solve initial value problems.

Differential equations are often used to model real world systems, and are the main tool in numerical simulations of said systems.
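For instance, one of the simplest numerical schemes is the forward Euler method; here is a minimal sketch in Python (my own illustration, not from this text) that simulates \(\frac{dy}{dx} = -y\) with \(y(0) = 1\), whose exact solution is \(e^{-x}\):

import math

# Forward Euler: repeatedly step y by h * f(x, y) to approximate the solution.
def euler(f, y0, x0, x1, steps):
    y, x = y0, x0
    h = (x1 - x0) / steps
    for _ in range(steps):
        y += h * f(x, y)
        x += h
    return y

# Simulate dy/dx = -y from x = 0 to x = 1 with y(0) = 1.
approx = euler(lambda x, y: -y, 1.0, 0.0, 1.0, 1000)
print(approx, math.exp(-1.0))  # ~0.3677 vs ~0.3679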

1.1. ODE

An ODE (ordinary differential equation) is a differential equation whose unknown function depends on a single variable, and which involves ordinary derivatives of that function.
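For example, exponential decay (with decay constant \(k\)) is described by the ODE:

\begin{align} \label{} \frac{dy}{dx} = -ky \end{align}

whose solutions are \(y(x) = Ce^{-kx}\).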

1.2. PDE

A PDE (partial differential equation) is a differential equation whose unknown function depends on multiple variables, and which generally involves partial derivatives of that function.
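For example, the heat equation relates the time derivative of an unknown function \(u(x, t)\) to its second spatial derivative:

\begin{align} \label{} \frac{\partial u}{\partial t} = \alpha\frac{\partial^{2} u}{\partial x^{2}} \end{align}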

1.3. initial value problem

An initial value problem is a problem where one is given a differential equation along with particular values of the unknown function (and possibly of its derivatives) at some point, and the result is a particular solution.
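For instance, the initial value problem

\begin{align} \label{} \frac{dy}{dx} = y, \quad y(0) = 2 \end{align}

has the particular solution \(y(x) = 2e^{x}\), picked out of the general solution \(y(x) = Ce^{x}\) by the initial condition.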

1.4. separable differential equation

For ODEs, separable differential equations are differential equations of the form:

\begin{align} \label{} \frac{dy}{dx} = f(y)g(x) \end{align}

which can be solved by taking an integral on both sides:

\begin{align} \label{} \frac{dy}{f(y)} = g(x)dx \\ \int\frac{dy}{f(y)} = \int g(x)dx \end{align}

Evaluating the integrals and solving for \(y\), you obtain solutions for \(y\) in terms of \(x\).
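As a quick worked illustration (a standard example, not specific to this text), take \(f(y) = y\) and \(g(x) = x\):

\begin{align} \label{} \frac{dy}{dx} = xy \\ \int\frac{dy}{y} = \int x dx \\ \ln|y| = \frac{x^{2}}{2} + C \\ y = Ae^{x^{2}/2} \end{align}

where \(A = \pm e^{C}\) absorbs the constant of integration.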

1.5. linear differential equation

Linear differential equations are differential equations of the form:

\begin{align} \label{} [\sum_{i}f_{i}(x)D^{i}]y(x) = g(x) \end{align}

where \(D\) is the derivative operator \(\frac{d}{dx}\). They are linear because the unknown function \(y(x)\) is being operated on by a linear operator, so common methods from linear algebra can be used in order to analyze equations of this kind. For example, a first-order linear differential equation would look like this:

\begin{align} \label{} [f(x)D + g(x)]y(x) = h(x) \end{align}

which can be solved by introducing an integrating factor \(\mu(x)\), where \(G(x) = \frac{g(x)}{f(x)}\) and \(H(x) = \frac{h(x)}{f(x)}\):

\begin{align} \label{} [D + G(x)]y(x) = H(x) \\ \mu(x)[D + G(x)]y(x) = \mu(x)H(x) \\ \mu'(x) := G(x)\mu(x) \\ D(\mu(x)y(x)) = \mu(x)H(x) \\ y(x) = \frac{\int\mu(x)H(x)dx}{\mu(x)} \end{align}

Now to solve for \(\mu(x)\), using separable differential equation methods:

\begin{align} \label{} \frac{d\mu}{dx} = G(x)\mu(x) \\ \frac{1}{\mu}d\mu = G(x)dx \\ \int\frac{1}{\mu}d\mu = \int G(x)dx \\ \ln(\mu) = \int G(x)dx \\ e^{\int G(x)dx} = \mu \end{align}

Therefore:

\begin{align} \label{} y(x) = \frac{\int e^{\int G(x)dx}H(x)dx}{e^{\int G(x)dx}} \end{align}

Then, to model any particular first-order system, plug in the functions \(G(x)\) and \(H(x)\).
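As a worked illustration (a standard example, not specific to this text), take \(G(x) = 2\) and \(H(x) = x\), so that \(\mu(x) = e^{2x}\):

\begin{align} \label{} y(x) = \frac{\int xe^{2x}dx}{e^{2x}} = \frac{e^{2x}(\frac{x}{2} - \frac{1}{4}) + C}{e^{2x}} = \frac{x}{2} - \frac{1}{4} + Ce^{-2x} \end{align}

which you can check satisfies \(y' + 2y = x\).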

1.5.1. superposition principle

The principle of superposition states that, for a homogeneous linear differential equation (one where \(g(x) = 0\)), any solutions \(f_i(x)\) add to a new solution:

\begin{align} \label{} \sum_{i=0}^{N}f_{i}(x) = f_{new}(x) \end{align}

that also satisfies the linear differential equation. This works because the operator is linear, so additive properties work over this space.
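Concretely, writing the homogeneous equation as \(L[y] = 0\) for a linear operator \(L\), two solutions \(f_{1}\) and \(f_{2}\) combine as:

\begin{align} \label{} L[f_{1} + f_{2}] = L[f_{1}] + L[f_{2}] = 0 + 0 = 0 \end{align}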

1.5.2. Higher Order Linear Differential Equations

Solving higher order linear differential equations requires a couple of tricks. For example, transforms such as the Laplace transform or the Fourier transform may be used. Such transforms reduce differential equation problems to algebraic problems, thus simplifying their solution. Other methods include guessing (I'm not pulling your leg here, this is real), formulation as an eigenvalue problem, and Taylor polynomial solutions. We will take a look at all of these in this section.

1.5.3. Homogeneous Case

Take the case \(Ay'' + By' + Cy = 0\) and substitute the form \(y = De^{kt}\). Then:

\begin{align} \label{} De^{kt}(Ak^{2} + Bk + C) = 0 \\ Ak^{2} + Bk + C = 0 \end{align}

Then, use the quadratic formula to solve for \(k\) in terms of the other constants. Such a polynomial is called the characteristic polynomial of this differential equation.
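For example (a standard case, not specific to this text), take \(y'' - 3y' + 2y = 0\):

\begin{align} \label{} k^{2} - 3k + 2 = 0 \\ (k - 1)(k - 2) = 0 \\ k = 1, 2 \end{align}

so \(e^{t}\) and \(e^{2t}\) are solutions, and by superposition \(y = C_{1}e^{t} + C_{2}e^{2t}\) is the general solution.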

1.5.4. Eigenvalue Problems

Eigenvalue problems can be solved just like in the familiar linear algebra case. For instance, take some differential equation in this form:

\begin{align} \label{} A(f) = \lambda f \end{align}

where \(A\) is a linear operator on function space, and \(\lambda\) is any constant. Traditionally, one would solve such an eigenvalue problem like so:

\begin{align} \label{} \det{(A - \lambda I)} = 0 \end{align}

In the simple example of a polynomial basis, this function \(f\) can be represented as some linear combination of linearly independent polynomials. A simple basis to choose is the Taylor (monomial) basis, i.e. \(\vec{e_{n}} = x^{n}\), where \(e_{n}\) is the nth basis vector. Note that there are many polynomial bases that are orthogonal and span this subset of function space, but this is a simple example. In this case, the matrix \(A\) represents an operation on an infinite polynomial, and the \(\lambda I\) tells you to subtract \(\lambda\) from every diagonal entry. You can interpret this literally, using the following example.

1.5.4.1. Example
\begin{align} \label{} D(r^{2}D(f(r))) = \lambda f(r) \end{align}

is such an example of an eigenvalue problem. Now, using the Taylor basis, we need to know two things: what \(D\) is as an infinite dimensional matrix in this basis, and what multiplication by \(r^{2}\) is as an infinite dimensional matrix. \(f\) is some unknown vector we are trying to solve for in this system. Note this observation:

\begin{align} \label{} \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 \\ 0 & 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 0 & 4 \\ \end{pmatrix} \begin{pmatrix} a \\ b \\ c \\ d \\ e \end{pmatrix} = \begin{pmatrix} b \\ 2c \\ 3d \\ 4e \end{pmatrix} \end{align}

This matrix encodes the power rule in the Taylor basis, truncated here to polynomials of degree 4; each entry in the vectors holds the coefficient of the nth power monomial, which means, for example:

\begin{align} \label{} \begin{pmatrix} a \\ b \\ c \\ d \\ e \end{pmatrix} := a + bx + cx^{2} + dx^{3} + ex^{4} \end{align}

then the derivative of this vector would be:

\begin{align} \label{derivative} b + 2cx + 3dx^{2} + 4ex^{3} \end{align}

which is exactly the set of coefficients in the resultant vector! Now, if we generalize this to an infinite number of dimensions (where the vector has infinite length and the matrix has infinitely many entries), the matrix has the same effect.

Thus, the infinite matrix with the increasing entries just off the diagonal is the \(D\) matrix, or \(D\) operator. But what is the \(r^{2}\) operator? We know it must be a matrix that shifts every coefficient two slots later (the coefficient of \(x^{n}\) becomes the coefficient of \(x^{n+2}\)) and pads the first two entries of the vector with zeros. If we call this matrix \(R\), the matrix multiplication \(DRD\) should yield a new infinite matrix \(M\), which we can use in order to solve the eigenvalue problem \(\det{(M - \lambda I)} = 0\). Now this matrix \(R\) is:

\begin{align} \label{R matrix} \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ \end{pmatrix} \end{align}

and so on. This matrix does to a coefficient vector exactly what multiplying by \(r^{2}\) does to a polynomial. We then multiply the two matrices to get this new matrix:

\begin{align} \label{new} \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ \end{pmatrix} \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 \\ 0 & 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 0 & 4 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 \\ 0 & 0 & 0 & 3 & 0 \\ \end{pmatrix} \end{align}

and so on; you can see the pattern. Now we multiply by another \(D\) on the left:

\begin{align} \label{DS} \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 \\ 0 & 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 0 & 4 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 \\ 0 & 0 & 0 & 3 & 0 \\ \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 2 \cdot 1 & 0 & 0 & 0 \\ 0 & 0 & 3 \cdot 2 & 0 & 0 \\ 0 & 0 & 0 & 4 \cdot 3 & 0 \\ 0 & 0 & 0 & 0 & 5 \cdot 4 \end{pmatrix} \end{align}

Of course, the \(5 \cdot 4\) doesn't actually come from the finite product above, but it does appear in the infinite version of this process. Now, we can finally subtract \(\lambda\) from the diagonal of this infinite matrix:

\begin{align} \label{lambda} \begin{pmatrix} -\lambda & 0 & 0 & 0 & 0 \\ 0 & 2 \cdot 1 - \lambda & 0 & 0 & 0 \\ 0 & 0 & 3 \cdot 2 - \lambda & 0 & 0 \\ 0 & 0 & 0 & 4 \cdot 3 - \lambda & 0 \\ 0 & 0 & 0 & 0 & 5 \cdot 4 - \lambda \end{pmatrix} \end{align}

Now, you might be wondering how we're going to take the determinant of this infinite matrix. We can take a limit of finite matrices to find out what the generalization might be. For instance, the 3-dimensional case would look like this:

\begin{align} \label{} \lambda^{2}(2\cdot 1 - \lambda) \end{align}

(since the finite truncation cuts the matrix off, its very last diagonal entry is \(-\lambda\) rather than \(3 \cdot 2 - \lambda\), so there is no \(3 \cdot 2\) term). Now taking some higher dimensions:

\begin{align} \label{higher dimensions} \lambda^{2}(2 \cdot 1 - \lambda)(3 \cdot 2 - \lambda) \\ \lambda^{2}(2 \cdot 1 - \lambda)(3 \cdot 2 - \lambda)(4 \cdot 3 - \lambda) \\ \lambda^{2}(2 \cdot 1 - \lambda)(3 \cdot 2 - \lambda)(4 \cdot 3 - \lambda)(5 \cdot 4 - \lambda) \end{align}

Using inductive reasoning, we should expect the infinite form to be:

\begin{align} \label{} \det{(M)} = -\lambda(2 \cdot 1 - \lambda)(3 \cdot 2 - \lambda)(4 \cdot 3 - \lambda)(5 \cdot 4 - \lambda)(6 \cdot 5 - \lambda)\dots \end{align}

(note that it isn't \(\lambda^{2}\) because the very last \(-\lambda\) comes from the truncation and never gets multiplied in the infinite case, which is also why the product is negative). Note that if we want to set \(\det(M) = 0\), then \(\lambda = n(n - 1)\) where \(n\) is a natural number (including zero). Then we substitute the \(\lambda\) for some \(n\) back in; let's use \(n = 2\) as an example:

\begin{align} \label{n=2} \begin{pmatrix} -2 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 4 & 0 & 0 \\ 0 & 0 & 0 & 10 & 0 \\ 0 & 0 & 0 & 0 & 18 \end{pmatrix} \begin{pmatrix} a_{1} \\ a_{2} \\ a_{3} \\ a_{4} \\ a_{5} \\ \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} \end{align}

Clearly, when we choose \(n = 2\) (so \(\lambda = 2 \cdot 1 = 2\)), the second value \(a_{2}\) is free and the rest must be zero for the given eigenfunction, meaning the eigenvector is the coefficient vector of \(ax\) for any value \(a\). In general, for a given \(n\), the eigenfunction corresponding to \(\lambda = n(n-1)\) is \(ax^{n-1}\). This is one of the solutions to this differential equation.
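As a numerical sanity check, here is a minimal sketch in Python/NumPy (my own illustration, not from this text) that truncates the infinite matrices to \(N \times N\); the names \(D\), \(R\), and \(M\) follow the text:

import numpy as np

N = 8  # truncation size; the true matrices are infinite

# D encodes the power rule: the coefficient of x^n contributes n to the coefficient of x^(n-1).
D = np.zeros((N, N))
for n in range(1, N):
    D[n - 1, n] = n

# R encodes multiplication by r^2: every coefficient is shifted two slots later.
R = np.zeros((N, N))
for n in range(N - 2):
    R[n + 2, n] = 1

# M represents the operator D(r^2 D(.)) in the truncated Taylor basis.
M = D @ R @ D

print(np.diag(M))  # [ 0.  2.  6. 12. 20. 30. 42.  0.] -- the final 0 is a truncation artifact

# Check one eigenvector: the coefficient vector of r^2.
f = np.zeros(N)
f[2] = 1.0
print(M @ f)  # equals 6 * f, so r^2 is an eigenfunction with eigenvalue 3 * 2

The diagonal entries reproduce the eigenvalues \(0, 2 \cdot 1, 3 \cdot 2, 4 \cdot 3, \dots\) from the determinant above, apart from boundary artifacts caused by the truncation.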

It turns out there's another solution in a space that the Taylor basis does not span, but I'll leave it as an exercise to find the other solution using this method, by extending it to include other kinds of functions. Note that for your eigenbasis one can also use the complex exponentials to make a Fourier basis, but that's also easy to generalize.

Copyright © 2024 Preston Pan